A recent trend in artificial intelligence is to use pretrained models for language and vision tasks; these models have achieved extraordinary performance, but also puzzling failures. Probing the abilities of these models in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently across many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in-distribution and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 task types over 40 datasets to evaluate different aspects of reliability in both the vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions for the vision and language modalities, respectively. Plex greatly improves the state of the art across reliability tasks and simplifies the traditional protocol, as it improves out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
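To make the selective-prediction setting concrete, here is a minimal sketch (not the paper's evaluation code) of the standard recipe: answer only when the maximum softmax probability clears a threshold, and report coverage alongside accuracy on the answered examples. The threshold value and function name are illustrative.

```python
import numpy as np

def selective_prediction(probs, labels, threshold=0.8):
    """Abstain when max softmax probability falls below `threshold`.

    probs: (N, C) array of predictive probabilities; labels: (N,) int array.
    Returns (coverage, selective accuracy on the answered subset).
    """
    confidence = probs.max(axis=1)
    answered = confidence >= threshold          # examples the model answers
    coverage = answered.mean()
    if not answered.any():
        return 0.0, float("nan")
    preds = probs.argmax(axis=1)
    selective_acc = (preds[answered] == labels[answered]).mean()
    return coverage, selective_acc
```

Sweeping the threshold traces out the coverage-accuracy trade-off that selective prediction benchmarks evaluate.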
Supervised learning datasets often contain privileged information, in the form of features available at training time but not at test time, such as the ID of the annotator who provided the label. We argue that privileged information is useful for explaining away label noise, thereby reducing the harmful impact of noisy labels. We develop a simple and efficient method for supervised learning with neural networks: it transfers knowledge learned from the privileged information via shared weights, and approximately marginalizes over the privileged information at test time. Our method, TRAM (TRansfer And Marginalize), has minimal training-time overhead and the same test-time cost as not using privileged information. TRAM performs strongly on the CIFAR-10H, ImageNet, and Civil Comments benchmarks.
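The following PyTorch sketch illustrates the transfer-and-marginalize idea as described above; it is not the authors' implementation, and all class and argument names are made up. A shared body feeds a head that also sees an embedding of the privileged annotator ID during training; at test time, predictions are averaged over annotator embeddings. The explicit loop here is for clarity only: the paper achieves the same test-time cost as a PI-free model via a cheaper approximation.

```python
import torch
import torch.nn as nn

class TramNet(nn.Module):
    def __init__(self, body, feat_dim, num_classes, num_annotators, pi_dim=16):
        super().__init__()
        self.body = body                                    # shared feature extractor
        self.pi_embed = nn.Embedding(num_annotators, pi_dim)
        self.head = nn.Linear(feat_dim + pi_dim, num_classes)
        self.num_annotators = num_annotators

    def forward(self, x, annotator_id):
        # Training: condition the head on the privileged annotator ID.
        feats = self.body(x)
        return self.head(torch.cat([feats, self.pi_embed(annotator_id)], dim=-1))

    @torch.no_grad()
    def predict(self, x):
        # Test: PI is unavailable; approximately marginalize by averaging the
        # predictive distribution over annotator embeddings (naive loop shown
        # for clarity; the paper uses a constant-cost approximation).
        feats = self.body(x)
        probs = 0.0
        for a in range(self.num_annotators):
            ids = torch.full((x.shape[0],), a, dtype=torch.long, device=x.device)
            logits = self.head(torch.cat([feats, self.pi_embed(ids)], dim=-1))
            probs = probs + logits.softmax(-1) / self.num_annotators
        return probs
```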
Uncertainty estimation in deep learning has recently emerged as a key area for improving the reliability and robustness of safety-critical applications. While many proposed methods focus either on distance-aware model uncertainty for out-of-distribution detection or on input-dependent label uncertainty for in-distribution calibration, both types of uncertainty are often necessary. In this work, we propose the HetSNGP method for jointly modeling model and data uncertainty. We show that our proposed model offers a favorable combination of these two types of uncertainty and thus outperforms baseline methods on several challenging out-of-distribution datasets, including CIFAR-100C, ImageNet-C, and ImageNet-A. Moreover, we propose HetSNGP Ensemble, an ensembled version of our method that additionally models uncertainty over the network parameters and outperforms other ensemble baselines.
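As a rough illustration of what jointly modeling model and data uncertainty means operationally, the sketch below samples logits from both a distance-aware (SNGP-style) posterior and an input-dependent label-noise distribution, then averages the resulting softmax probabilities. This is a simplified, assumption-laden sketch of the general idea, not the HetSNGP model itself.

```python
import torch

def joint_uncertainty_predict(gp_mean, gp_var, noise_scale, num_samples=100):
    """Illustrative combination of model and data uncertainty.

    gp_mean, gp_var: per-class logit mean/variance from a distance-aware
        (SNGP-style) head, capturing *model* uncertainty.
    noise_scale: input-dependent label-noise scale from a heteroscedastic
        head, capturing *data* uncertainty.
    Returns the Monte Carlo average of the softmax over both noise sources.
    """
    probs = 0.0
    for _ in range(num_samples):
        model_eps = torch.randn_like(gp_mean) * gp_var.sqrt()   # model uncertainty
        data_eps = torch.randn_like(gp_mean) * noise_scale      # data uncertainty
        probs = probs + (gp_mean + model_eps + data_eps).softmax(-1) / num_samples
    return probs
```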
High-quality estimates of uncertainty and robustness are crucial for numerous real-world applications, especially for deep learning, which underlies many deployed ML systems. The ability to compare techniques that improve these estimates is therefore very important for research and practice alike. Yet competitive comparisons of methods are often lacking for a range of reasons, including: availability of compute for extensive tuning, inclusion of sufficiently many baselines, and concrete documentation for reproducibility. In this paper, we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks. As of this writing, the collection spans 19 methods across 9 tasks, each with at least 5 metrics. Each baseline is a self-contained experiment pipeline with easily reusable and extendable components. Our goal is to provide immediate starting points for experimentation with new methods or applications. Additionally, we provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results. The code is available at https://github.com/google/uncertainty-baselines.
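For a flavor of the metrics such baselines report, here is an illustrative reimplementation of one of them, expected calibration error (ECE); this is a generic sketch, not the repository's API, so consult the link above for the official pipelines.

```python
import numpy as np

def expected_calibration_error(probs, labels, num_bins=15):
    """Standard binned ECE: the coverage-weighted gap between confidence
    and accuracy across equal-width confidence bins."""
    confidence = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap      # weight by bin coverage
    return ece
```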
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities are still quite limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key to this method is a modality conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the chart QA task.
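The two-step pipeline can be summarized in a few lines of glue code. In the sketch below, `deplot` and `llm_complete` are placeholders for a plot-to-table model and any pretrained LLM completion API; neither is a real library call.

```python
def answer_chart_question(chart_image, question, deplot, llm_complete):
    """Two-stage DePlot-style pipeline: modality conversion, then LLM reasoning."""
    table = deplot(chart_image)   # step 1: plot/chart image -> linearized table text
    prompt = (
        "Read the table below and answer the question.\n\n"
        f"{table}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm_complete(prompt)   # step 2: one-shot/few-shot reasoning by the LLM
```

Because the conversion output is plain text, the LLM stage can be swapped or upgraded without retraining the conversion module, which is what makes the combination plug-and-play.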
Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.
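The two pretraining objectives can be pictured as image-to-text pairs. The sketch below shows one plausible data format; the field names and helper are illustrative assumptions, not the paper's data schema.

```python
def linearize_table(rows):
    # Flatten a table into "cell | cell | ..." rows separated by newlines.
    return "\n".join(" | ".join(str(cell) for cell in row) for row in rows)

def chart_derendering_example(chart_image, source_table):
    # Input: a rendered chart; target: its underlying table as text.
    return {"image": chart_image, "target": linearize_table(source_table)}

def math_reasoning_example(rendered_problem_image, answer):
    # Input: a math problem rendered as an image; target: the answer string.
    return {"image": rendered_problem_image, "target": str(answer)}
```

Training an image-to-text model on mixtures of such pairs is what pushes it to recover chart structure and perform numerical reasoning, the two capabilities the abstract identifies as key.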
Recent pre-trained language models have shown promising capabilities in generating fluent and realistic natural language text. However, generating multi-sentence text with global content planning has been a long-standing research question. Current approaches for controlled text generation can hardly address this issue, as they usually condition on a single known control attribute. In this study, we propose a low-cost yet effective framework which explicitly models the global content plan of the generated text. Specifically, it optimizes the joint distribution of the natural language sequence and the global content plan in a plug-and-play manner. We conduct extensive experiments on the well-established Recipe1M+ benchmark. Both automatic and human evaluations verify that our model achieves state-of-the-art performance on the task of recipe generation.
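One simple way to realize plug-and-play control with a global plan is to rescore candidate continuations by the frozen LM's likelihood plus a plan-agreement score. The sketch below is an illustrative approximation of that idea, not the paper's implementation; all names and the weighting scheme are assumptions.

```python
def plan_guided_rerank(candidates, lm_logprob, plan_score, plan, alpha=1.0):
    """Pick the continuation that best trades off fluency and plan agreement.

    candidates: list of candidate text continuations from the frozen LM.
    lm_logprob(text) -> float: log-likelihood under the frozen LM.
    plan_score(text, plan) -> float: agreement with the global content plan.
    alpha: weight balancing fluency against plan adherence.
    """
    return max(candidates,
               key=lambda c: lm_logprob(c) + alpha * plan_score(c, plan))
```

The appeal of this family of approaches is that the base LM stays frozen; only the lightweight plan scorer is task-specific.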
Fine-tuning pre-trained models has been ubiquitously proven effective across a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it yields an entirely new model for each task. Many recent works therefore propose to fine-tune only a small portion of the parameters while keeping most of the parameters shared across different tasks. These methods achieve surprisingly good performance and have been shown to be more stable than their fully fine-tuned counterparts. However, such methods are still not well understood. Some natural questions arise: How does the parameter sparsity lead to promising performance? Why is the model more stable than the fully fine-tuned models? How should the tunable parameters be chosen? In this paper, we first categorize the existing methods into random approaches, rule-based approaches, and projection-based approaches, based on how they choose which parameters to tune. Then, we show that all of these methods are in fact sparse fine-tuned models, and we conduct a novel theoretical analysis of them. We show that the sparsity effectively imposes a regularization on the original model by controlling the upper bound of the stability, and that this stability leads to the better generalization that has been observed empirically in many recent works. Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem. To better choose them, we propose a novel Second-order Approximation Method (SAM), which approximates the original problem with an analytically solvable optimization function; the tunable parameters are determined by directly optimizing this approximation. Experimental results show that our proposed SAM model outperforms many strong baseline models, and they also verify our theoretical analysis.
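To make the setting concrete, the sketch below shows the mechanical core shared by all the sparse fine-tuning methods discussed: masking gradients so the optimizer updates only a chosen subset of parameters. How the mask is chosen is precisely what distinguishes the random, rule-based, projection-based, and SAM approaches; this helper is illustrative, not any paper's code.

```python
import torch

def apply_sparse_mask(model, masks):
    """Zero out gradients outside the tunable subset after loss.backward().

    masks: dict mapping parameter name -> 0/1 tensor of the same shape,
    with 1 marking parameters selected for fine-tuning.
    """
    for name, param in model.named_parameters():
        if param.grad is not None:
            param.grad.mul_(masks[name])   # only masked-in entries get updated
```

Called between `loss.backward()` and `optimizer.step()`, this leaves all masked-out parameters exactly at their pre-trained values, which is what enables sharing them across tasks.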
Language models (LMs) trained on raw text have no direct access to the physical world. Gordon and Van Durme (2013) point out that LMs may therefore suffer from reporting bias: text rarely reports common facts, focusing instead on the unusual aspects of a situation. If LMs are trained only on text corpora and naively memorize local co-occurrence statistics, they will naturally learn a biased view of the physical world. While prior studies have repeatedly verified that smaller-scale LMs (e.g., RoBERTa, GPT-2) amplify reporting bias, it remains unknown whether this trend continues as models are scaled up. We investigate reporting bias in larger language models (LLMs) such as PaLM and GPT-3 from the perspective of color. Specifically, we query LLMs for the typical colors of objects, a simple kind of perceptually grounded physical commonsense. Surprisingly, we find that LLMs significantly outperform smaller LMs at determining the typical colors of objects and track human judgments more closely, rather than overfitting to surface patterns stored in text. This suggests that large models trained on language alone can overcome certain types of reporting bias that are characterized by local co-occurrences.
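A probe of this kind can be as simple as the following sketch, where `llm_complete` stands in for any LLM completion API; the prompt template and color list are simplified assumptions, not the study's exact protocol.

```python
COLORS = ["red", "orange", "yellow", "green", "blue", "purple",
          "brown", "black", "white", "gray", "pink"]

def typical_color(obj, llm_complete):
    """Query an LLM for an object's typical color and extract the answer."""
    prompt = f"Q: What is the typical color of a {obj}?\nA: The typical color is"
    completion = llm_complete(prompt).lower()
    # Return the first known color word mentioned in the completion, if any.
    for color in COLORS:
        if color in completion:
            return color
    return None
```

Aggregating such answers over many objects and comparing them against human color judgments is the kind of measurement that distinguishes models tracking perceptual reality from models echoing skewed textual co-occurrences.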
Recent advances in neural network language models have shown that expressive meaning representations can be derived by exploiting linguistic associations in large-scale natural language data. These potentially gestalt representations have enabled state-of-the-art performance in many practical applications. It would appear that we are on a path toward empirically deriving robust and expressive computable semantics. A key question that arises is whether language data alone can enable computers to understand the necessary truths about the physical world. This question deserves attention, because our future interactions with intelligent machines depend on our technologies correctly representing and processing the concepts (objects, properties, and processes) that humans commonly observe. After reviewing existing protocols, the aim of this work is to explore this question using a novel and tightly controlled reasoning test, and to highlight what these models may learn directly from pure linguistic data.